Effective and efficient structure learning with pruning and model averaging strategies
Learning the structure of a Bayesian Network (BN) with score-based solutions
involves exploring the search space of possible graphs and moving towards the
graph that maximises a given objective function. Some algorithms offer exact
solutions that guarantee to return the graph with the highest objective score,
while others offer approximate solutions in exchange for reduced computational
complexity. This paper describes an approximate BN structure learning
algorithm, which we call Model Averaging Hill-Climbing (MAHC), that combines
two novel strategies with hill-climbing search. The algorithm starts by pruning
the search space of graphs, where the pruning strategy can be viewed as an
aggressive version of the pruning strategies that are typically applied to
combinatorial optimisation structure learning problems. It then performs model
averaging within the hill-climbing search, moving to the neighbouring graph
whose objective score, averaged over that graph and all of its own valid
neighbouring graphs, is highest. Comparisons with other
algorithms spanning different classes of learning suggest that the combination
of aggressive pruning with model averaging is both effective and efficient,
particularly in the presence of data noise.
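The model-averaging step described above can be sketched as follows. This is a minimal, illustrative Python sketch of the idea, not the authors' implementation: graphs are frozensets of directed edges, the objective `score` is a stand-in for a real BN scoring function (e.g. BIC or BDeu), and acyclicity checks and the pruning strategy are omitted for brevity.

```python
from itertools import permutations

def neighbours(edges, nodes):
    """All graphs reachable by adding or removing one directed edge
    (acyclicity checks omitted in this sketch)."""
    result = []
    for u, v in permutations(nodes, 2):
        e = (u, v)
        if e in edges:
            result.append(edges - {e})   # single-edge deletion
        else:
            result.append(edges | {e})   # single-edge addition
    return result

def averaged_score(edges, nodes, score):
    """Score of a candidate graph averaged with the scores of all its
    own neighbours -- the model-averaging idea from the abstract."""
    neigh = neighbours(edges, nodes)
    total = score(edges) + sum(score(g) for g in neigh)
    return total / (1 + len(neigh))

def mahc_sketch(nodes, score, max_iters=50):
    """Hill-climb over graphs, moving to the neighbour whose *averaged*
    score is highest; stop when no neighbour improves on the current one."""
    current = frozenset()
    best = averaged_score(current, nodes, score)
    for _ in range(max_iters):
        candidates = [(averaged_score(g, nodes, score), frozenset(g))
                      for g in neighbours(current, nodes)]
        top_score, top_graph = max(candidates)
        if top_score <= best:
            break
        best, current = top_score, top_graph
    return current
```

Averaging each candidate's score with those of its own neighbourhood smooths the objective surface, which is one intuition for the robustness to data noise the abstract reports.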
Open problems in causal structure learning: A case study of COVID-19 in the UK
Causal machine learning (ML) algorithms recover graphical structures that
tell us something about cause-and-effect relationships. The causal
representation provided by these algorithms enables transparency and
explainability, which is necessary for decision making in critical real-world
problems. Yet, causal ML has had limited impact in practice compared to
associational ML. This paper investigates the challenges of causal ML with
application to COVID-19 UK pandemic data. We collate data from various public
sources and investigate what the various structure learning algorithms learn
from these data. We explore the impact of different data formats on algorithms
spanning different classes of learning, and assess the results produced by each
algorithm, and groups of algorithms, in terms of graphical structure, model
dimensionality, sensitivity analysis, confounding variables, predictive and
interventional inference. We use these results to highlight open problems in
causal structure learning and directions for future research. To facilitate
future work, we make all graphs, models, data sets, and source code publicly
available online.